Quasi-Newton methods for machine learning: forget the past, just sample
Authors
Abstract
We present two sampled quasi-Newton methods (sampled LBFGS and sampled LSR1) for solving empirical risk minimization problems that arise in machine learning. Contrary to the classical variants of these methods, which sequentially build Hessian or inverse Hessian approximations as the optimization progresses, our proposed methods sample points randomly around the current iterate at every iteration to produce these approximations. As a result, the approximations constructed make use of more reliable (recent and local) information and do not depend on past iterate information that could be significantly stale. Our proposed algorithms are efficient in terms of accessed data points (epochs) and have enough concurrency to take advantage of parallel/distributed computing environments. We provide convergence guarantees for our proposed methods. Numerical tests on a toy classification problem as well as on popular benchmarking binary classification and neural network training tasks reveal that the methods outperform their classical variants.
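To make the sampling idea concrete, here is a minimal Python sketch of a sampled-LBFGS step direction. It is an illustration under our own assumptions, not the authors' reference implementation: the gradient-difference construction of the curvature pairs, the sampling radius, the curvature filter, and all names (sampled_lbfgs_direction, grad, m, radius) are placeholders chosen for the example.

```python
import numpy as np

def sampled_lbfgs_direction(w, grad, m=10, radius=1e-2, rng=None):
    """Sketch of a sampled-LBFGS step direction.

    At the current iterate w, sample m points around w, build fresh
    curvature pairs (s_i, y_i) from gradient differences, and apply the
    standard L-BFGS two-loop recursion to -grad(w).  `grad` is assumed
    to be a callable returning the gradient at a point.
    """
    rng = np.random.default_rng() if rng is None else rng
    g = grad(w)
    S, Y = [], []
    for _ in range(m):
        s = radius * rng.standard_normal(w.shape)    # random displacement
        y = grad(w + s) - g                          # curvature along s
        if s @ y > 1e-10 * np.linalg.norm(s) * np.linalg.norm(y):
            S.append(s)                              # keep only pairs with
            Y.append(y)                              # positive curvature
    # Standard two-loop recursion, applied to -g with the sampled pairs.
    q = -g.copy()
    alphas = []
    for s, y in zip(reversed(S), reversed(Y)):       # newest to oldest
        a = (s @ q) / (y @ s)
        alphas.append(a)
        q -= a * y
    if S:  # initial scaling gamma = s'y / y'y for the seed matrix
        q *= (S[-1] @ Y[-1]) / (Y[-1] @ Y[-1])
    for (s, y), a in zip(zip(S, Y), reversed(alphas)):  # oldest to newest
        b = (y @ q) / (y @ s)
        q += (a - b) * s
    return q  # quasi-Newton direction, approximately -H_k grad(w)
```

The key departure from classical LBFGS in this sketch is that S and Y are rebuilt from scratch around the current iterate, so no stale pairs from earlier iterates are carried forward.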
Similar resources
New Quasi-Newton Optimization Methods for Machine Learning
This thesis develops new quasi-Newton optimization methods that exploit the well-structured functional form of objective functions often encountered in machine learning, while still maintaining the solid foundation of the standard BFGS quasi-Newton method. In particular, our algorithms are tailored for two categories of machine learning problems: (1) regularized risk minimization problems with c...
Quasi-Newton Methods for Nonconvex Constrained Multiobjective Optimization
Here, a quasi-Newton algorithm for constrained multiobjective optimization is proposed. Under suitable assumptions, global convergence of the algorithm is established.
Projected Newton-type Methods in Machine Learning
We consider projected Newton-type methods for solving large-scale optimization problems arising in machine learning and related fields. We first introduce an algorithmic framework for projected Newton-type methods by reviewing a canonical projected (quasi-)Newton method. This method, while conceptually pleasing, has a high computation cost per iteration. Thus, we discuss two variants that are m...
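For orientation, the canonical projected (quasi-)Newton iteration mentioned above can be sketched in a few lines; this is our own illustrative rendering under placeholder names (hess_inv_vec, project), not code from the paper.

```python
import numpy as np

def projected_newton_step(x, grad, hess_inv_vec, project, alpha=1.0):
    """One canonical projected (quasi-)Newton step: move along a
    (quasi-)Newton direction, then project back onto the feasible set.
    `hess_inv_vec` applies an inverse-Hessian approximation to a vector;
    `project` is the projection onto the constraint set C."""
    d = hess_inv_vec(grad(x))          # (quasi-)Newton direction
    return project(x - alpha * d)      # projected update

# Example: box constraints l <= x <= u with an identity inverse Hessian,
# so the step reduces to projected gradient descent on f(x) = 0.5*||x||^2.
l, u = -np.ones(3), np.ones(3)
x_next = projected_newton_step(
    np.array([2.0, -3.0, 0.5]),
    grad=lambda x: x,
    hess_inv_vec=lambda g: g,
    project=lambda x: np.clip(x, l, u),
)
```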
On the Behavior of Damped Quasi-Newton Methods for Unconstrained Optimization
We consider a family of damped quasi-Newton methods for solving unconstrained optimization problems. This family resembles that of Broyden with line searches, except that the change in gradients is replaced by a certain hybrid vector before updating the current Hessian approximation. This damped technique modifies the Hessian approximations so that they are maintained sufficiently positive defi...
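One well-known instance of such a hybrid vector is Powell's damping for BFGS. The sketch below illustrates that classic form under the assumption that it is representative of the family discussed here; the threshold eta = 0.2 is Powell's traditional choice, not necessarily the one used in this paper.

```python
import numpy as np

def damped_bfgs_update(B, s, y, eta=0.2):
    """One damped BFGS update with Powell's damping: replace the gradient
    change y by a hybrid vector r = theta*y + (1-theta)*B s before updating.

    B is the current Hessian approximation, s = x_new - x_old, and
    y = grad_new - grad_old.  The choice of theta guarantees
    s'r >= eta * s'Bs > 0, keeping the updated matrix positive definite."""
    Bs = B @ s
    sBs = s @ Bs
    sy = s @ y
    theta = 1.0 if sy >= eta * sBs else (1.0 - eta) * sBs / (sBs - sy)
    r = theta * y + (1.0 - theta) * Bs   # damped (hybrid) gradient change
    # Standard BFGS update with r in place of y.
    return B - np.outer(Bs, Bs) / sBs + np.outer(r, r) / (s @ r)
```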
Proximal Quasi-Newton Methods for Convex Optimization
In [19], a general, inexact, efficient proximal quasi-Newton algorithm for composite optimization problems has been proposed and a sublinear global convergence rate has been established. In this paper, we analyze the convergence properties of this method, both in the exact and inexact setting, in the case when the objective function is strongly convex. We also investigate a practical variant of t...
Journal
Journal title: Optimization Methods & Software
Year: 2021
ISSN: 1055-6788, 1026-7670, 1029-4937
DOI: https://doi.org/10.1080/10556788.2021.1977806